Dual subgradient algorithms for large-scale nonsmooth learning problems

Authors

  • Bruce Cox
  • Anatoli Juditsky
  • Arkadi Nemirovski
Abstract

“Classical” First Order (FO) algorithms of convex optimization, such as the Mirror Descent algorithm or Nesterov’s optimal algorithm for smooth convex optimization, are well known to have optimal (theoretical) complexity estimates which do not depend on the problem dimension. However, to attain this optimality the problem domain must admit a “good proximal setup”. The latter essentially means that (1) the problem domain should satisfy certain conditions of “favorable geometry”, and (2) the practical use of these methods is conditioned on our ability to compute, at moderate cost, the proximal transformation at each iteration. More often than not both conditions are satisfied in optimization problems arising in computational learning, which explains why proximal-type FO methods have recently become the methods of choice for solving various learning problems. Yet they meet their limits in several important problems, such as multi-task learning with a large number of tasks, where the problem domain does not exhibit favorable geometry, and learning and matrix completion problems with a nuclear-norm constraint, where the numerical cost of computing the proximal transformation becomes prohibitive in the large-scale setting. We propose a novel approach to solving nonsmooth optimization problems arising in …

Research of the third author was supported by ONR Grant N000140811104 and NSF Grants DMS 0914785 and CMMI 1232623.

B. Cox: US Air Force, Washington, DC, USA. A. Juditsky (corresponding author): LJK, Université J. Fourier, B.P. 53, 38041 Grenoble Cedex 9, France, e-mail: [email protected]. A. Nemirovski: Georgia Institute of Technology, Atlanta, GA 30332, USA, e-mail: [email protected].
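As a concrete illustration of the “proximal transformation” the abstract refers to, the following is a minimal sketch of one Mirror Descent step under the entropy proximal setup on the probability simplex, a standard example of “favorable geometry”. The function names, toy objective, and step sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def md_step_simplex(x, g, step):
    """One Mirror Descent step on the probability simplex with the entropy
    prox-function: the proximal transformation reduces to a multiplicative
    (exponentiated-gradient) update followed by renormalization."""
    w = x * np.exp(-step * g)        # exponentiated-gradient update
    return w / w.sum()               # renormalize back onto the simplex

# Toy usage: minimize f(x) = max_i (a_i^T x) over the simplex via subgradients.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))
x = np.full(10, 0.1)                  # start at the simplex barycenter
for t in range(1, 101):
    i = np.argmax(A @ x)              # active piece of the max defines a subgradient
    x = md_step_simplex(x, A[i], step=1.0 / np.sqrt(t))
```

When the domain admits such a setup, the per-iteration cost stays negligible; the paper's point is that for domains like nuclear-norm balls this step becomes the bottleneck.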

Similar articles

Duality between subgradient and conditional gradient methods

Given a convex optimization problem and its dual, there are many possible first-order algorithms. In this paper, we show the equivalence between mirror descent algorithms and algorithms generalizing the conditional gradient method. This is done through convex duality and implies notably that for certain problems, such as for supervised machine learning problems with nonsmooth losses or problems ...
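A hedged sketch of the conditional gradient (Frank-Wolfe) side of this duality, assuming an ℓ1-ball domain: the method only needs a linear minimization oracle rather than a projection or proximal step. The helper names and step schedule below are illustrative, not from the cited paper.

```python
import numpy as np

def lmo_l1_ball(g, radius=1.0):
    """Linear minimization oracle for the l1 ball: a minimizer of <g, s> over
    ||s||_1 <= radius is a signed vertex along the largest-magnitude entry of g."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])
    return s

def frank_wolfe(grad, x0, steps=200, radius=1.0):
    """Plain conditional gradient: only linear minimizations over the domain
    are required, no projections or proximal mappings."""
    x = x0
    for t in range(steps):
        s = lmo_l1_ball(grad(x), radius)
        gamma = 2.0 / (t + 2)          # classical step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

# Toy usage: least squares over the unit l1 ball.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x_hat = frank_wolfe(lambda x: A.T @ (A @ x - b), np.zeros(10))
```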


Optimal subgradient algorithms with application to large-scale linear inverse problems

This study addresses some algorithms for solving structured unconstrained convex optimization problems using first-order information where the underlying function includes high-dimensional data. The primary aim is to develop an implementable algorithmic framework for solving problems with multiterm composite objective functions involving linear mappings using the optimal subgradient algorithm, ...
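This is not the optimal subgradient framework of the cited study, but a minimal sketch of the kind of multiterm composite objective with a linear mapping it targets, solved here by a plain subgradient method purely for illustration; all names, step sizes, and the best-iterate rule are assumptions.

```python
import numpy as np

def composite_subgradient(A, b, lam, x0, steps=500):
    """Plain subgradient method on f(x) = 0.5*||A x - b||^2 + lam*||x||_1,
    a two-term composite objective with a linear mapping A."""
    x, best, best_val = x0.copy(), x0.copy(), np.inf
    for t in range(1, steps + 1):
        g = A.T @ (A @ x - b) + lam * np.sign(x)        # subgradient of the sum
        x = x - g / (np.linalg.norm(g) * np.sqrt(t) + 1e-12)
        val = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
        if val < best_val:                               # track the best iterate
            best_val, best = val, x.copy()
    return best

rng = np.random.default_rng(2)
A, b = rng.standard_normal((40, 15)), rng.standard_normal(40)
x_hat = composite_subgradient(A, b, lam=0.1, x0=np.zeros(15))
```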


Primal-dual subgradient methods for convex problems

In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primal-dual since they are always able to generate a feasible approximation to the optimum of an appropriately formulated dual problem. Besides other advantages, this useful feature provides the methods with a reliable stopping criterion. T...
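A minimal sketch of the dual-averaging idea behind such primal-dual subgradient schemes, assuming a Euclidean ball domain and a simple growing prox coefficient; it is an illustration of the general mechanism (accumulate all past subgradients, re-minimize a regularized linear model, average the iterates), not the specific methods of the cited paper.

```python
import numpy as np

def dual_averaging_ball(subgrad, x0, radius=1.0, steps=300):
    """Dual-averaging sketch over a Euclidean ball: accumulate past subgradients,
    minimize <z, x> + (beta_t/2)*||x - x0||^2 over the ball at every iteration,
    and return the running average of the feasible iterates."""
    z = np.zeros_like(x0)              # accumulated subgradients
    x, x_avg = x0.copy(), np.zeros_like(x0)
    for t in range(1, steps + 1):
        z += subgrad(x)
        beta = np.sqrt(t)              # growing prox coefficient (illustrative)
        y = x0 - z / beta              # unconstrained minimizer of the model
        nrm = np.linalg.norm(y)
        x = y if nrm <= radius else y * (radius / nrm)   # project onto the ball
        x_avg += (x - x_avg) / t       # averaged feasible solution
    return x_avg

# Toy usage: minimize ||x - c||_1 over the unit Euclidean ball.
c = np.array([2.0, -1.0, 0.5])
x_hat = dual_averaging_ball(lambda x: np.sign(x - c), np.zeros(3))
```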


Randomized Block Subgradient Methods for Convex Nonsmooth and Stochastic Optimization

Block coordinate descent methods and stochastic subgradient methods have been extensively studied in optimization and machine learning. By combining randomized block sampling with stochastic subgradient methods based on dual averaging ([22, 36]), we present stochastic block dual averaging (SBDA)—a novel class of block subgradient methods for convex nonsmooth and stochastic optimization. SBDA re...
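A hedged sketch of a randomized block subgradient update, not the SBDA method itself: at each iteration one coordinate block is sampled uniformly and only that block of variables is updated with the corresponding block of a subgradient. The block partition, step sizes, and function names are illustrative assumptions.

```python
import numpy as np

def randomized_block_subgradient(subgrad, x0, blocks, steps=500, seed=3):
    """Update a uniformly sampled coordinate block with the matching block of a
    subgradient; the remaining coordinates are left unchanged at that iteration."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(1, steps + 1):
        g = subgrad(x)
        b = blocks[rng.integers(len(blocks))]      # randomly chosen block
        x[b] -= g[b] / np.sqrt(t)                  # update only that block
    return x

# Toy usage: nonsmooth f(x) = ||x - c||_1 split into two coordinate blocks.
c = np.array([1.0, -2.0, 0.5, 3.0])
blocks = [np.array([0, 1]), np.array([2, 3])]
x_hat = randomized_block_subgradient(lambda x: np.sign(x - c), np.zeros(4), blocks)
```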


Incremental Subgradients for Constrained Convex Optimization: A Unified Framework and New Methods

We present a unifying framework for nonsmooth convex minimization bringing together ε-subgradient algorithms and methods for the convex feasibility problem. This development is a natural step for ε-subgradient methods in the direction of constrained optimization, since the Euclidean projection frequently required in such methods is replaced by an approximate projection, which is often easier to co...
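A minimal sketch of the idea of replacing an exact Euclidean projection by a cheaper approximate projection, here a single half-space projection onto the most violated linear constraint; the constraint set, objective, and step sizes are assumptions for illustration, not the framework of the cited paper.

```python
import numpy as np

def halfspace_projection(x, a, b):
    """Project x onto the half-space {y : a^T y <= b}; this cheap step plays the
    role of an approximate projection onto the intersection of many constraints."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def subgradient_with_approx_projection(subgrad, A, b, x0, steps=500):
    """Projected-subgradient sketch where the exact projection onto {x : A x <= b}
    is replaced by one half-space projection onto the most violated constraint."""
    x = x0.copy()
    for t in range(1, steps + 1):
        x = x - subgrad(x) / np.sqrt(t)            # subgradient step
        i = np.argmax(A @ x - b)                   # most violated constraint
        x = halfspace_projection(x, A[i], b[i])    # cheap approximate projection
    return x

# Toy usage: minimize ||x - c||_1 subject to A x <= 1 with an infeasible target c.
rng = np.random.default_rng(4)
A, b = rng.standard_normal((20, 5)), np.ones(20)
c = np.full(5, 2.0)
x_hat = subgradient_with_approx_projection(lambda x: np.sign(x - c), A, b, np.zeros(5))
```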



Journal title:
  • Math. Program.

Volume 148   Issue 

Pages  -

Publication year 2014